"Chris B" <c_b### [at] btconnect com nospam> wrote:
> Hi Mienai,
>
> I think there are a few problems with this approach.
>
> One biggy is that the images won't contain some of the information you would
> need in order to create a height field - In particular the height. You would
> need to know the distance of a point from the camera, which the colour
> information in the images doesn't give you.
> With the exception of certain shapes the colour information doesn't give you
> all of the normal information either, because a colour of <200,200> could be
> pointing slightly up or slightly down.
>
> Even if you introduced a third colour and could get good approximate normal
> information you would have to make some quite unreliable assumptions about
> the contours and smoothness of the surface to attempt to derive positional
> information from that. For example, it would still be possible to have a
> step change in the surface height that is imperceptible in the image.
>
> Regards,
> Chris B.
The way I was thinking of it was that the normal provides the delta in
height (the slope), which is why I figured I could just run the image
through a derivative to get an approximate height up to some +c value
(or maybe I need to integrate, I forget; it should be basic calculus
though, right?). While I agree that there are going to be some
inconsistencies (especially on vertical slopes), I still think this should
provide relatively accurate results, at least accurate enough to model
surfaces with a height map.
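As a rough sketch of what I mean (this is my own guess at an approach, not tested against real data): if the red/green channels hold the x and z normal components, the slopes of the surface are dh/dx = -nx/ny and dh/dz = -nz/ny, and a cumulative sum integrates those slopes into a height map up to the +c constant. The function name and the simple row-then-column integration scheme are just illustrative:

```python
import numpy as np

def height_from_normals(nx, nz, c=0.0):
    """Sketch: recover a height map (up to a constant c) from per-pixel
    normal components, assuming y points up and there are no vertical
    steps in the surface.

    nx, nz: 2D arrays of normal x and z components in [-1, 1].
    """
    # y component follows from the unit-length constraint on the normal.
    ny = np.sqrt(np.clip(1.0 - nx**2 - nz**2, 1e-9, 1.0))
    # Slopes of the height field implied by the normal.
    sx = -nx / ny   # dh/dx
    sz = -nz / ny   # dh/dz
    # Naive integration: integrate the first row along x, then each
    # column along z. Any +c offset is fine for a height field.
    h = np.cumsum(sz, axis=0)
    h = h + np.cumsum(sx[0:1, :], axis=1)
    return h + c
```

For a constant-slope plane this reproduces the slope exactly; noisy or inconsistent normals (the vertical-step case Chris mentions) would make the two integration orders disagree, which is exactly the unreliability he's pointing at.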
As for not all the info being contained, maybe I didn't make myself clear,
or I'm not getting what you're getting at. The y vector never points down,
but since it's the normal, the slope can still decrease. The point <200,200>
would be pointing halfway between positive x and z, with a slight incline
in the y direction.
I'm still looking for a good way to convert this.